299 research outputs found
Superpixels: An Evaluation of the State-of-the-Art
Superpixels group perceptually similar pixels to create visually meaningful
entities while heavily reducing the number of primitives for subsequent
processing steps. Owing to these properties, superpixel algorithms have
received much attention since the term was coined in 2003. Today, publicly
available superpixel algorithms are standard tools in low-level vision. As
such, and due to their quick adoption in a wide range of applications,
appropriate benchmarks are crucial for algorithm selection and comparison.
Until now, the rapidly growing number of algorithms as well as varying
experimental setups hindered the development of a unifying benchmark. We
present a comprehensive evaluation of 28 state-of-the-art superpixel algorithms
utilizing a benchmark focusing on fair comparison and designed to provide new
insights relevant for applications. To this end, we explicitly discuss
parameter optimization and the importance of strictly enforcing connectivity.
Furthermore, by extending well-known metrics, we are able to summarize
algorithm performance independent of the number of generated superpixels,
thereby overcoming a major limitation of available benchmarks. In addition, we
discuss runtime, robustness against noise, blur and affine transformations,
implementation details as well as aspects of visual quality. Finally, we
present an overall ranking of superpixel algorithms which redefines the
state-of-the-art and enables researchers to easily select appropriate
algorithms and the corresponding implementations, which are made publicly
available as part of our benchmark at
davidstutz.de/projects/superpixel-benchmark/.
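The abstract stresses strictly enforcing connectivity, since some superpixel algorithms emit label maps in which a single label falls apart into disconnected fragments. A minimal sketch of such a post-processing step (a hypothetical `enforce_connectivity` helper, not the benchmark's actual implementation) relabels every 4-connected component with its own id:

```python
import numpy as np
from collections import deque

def enforce_connectivity(labels):
    """Relabel a superpixel map so every label forms one 4-connected region.

    Pixels that share an input label but lie in disconnected fragments
    receive distinct output labels.
    """
    h, w = labels.shape
    out = -np.ones((h, w), dtype=int)
    next_label = 0
    for y in range(h):
        for x in range(w):
            if out[y, x] != -1:
                continue
            # BFS flood fill over the 4-neighborhood within one input label
            src = labels[y, x]
            queue = deque([(y, x)])
            out[y, x] = next_label
            while queue:
                cy, cx = queue.popleft()
                for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                               (cy, cx - 1), (cy, cx + 1)):
                    if (0 <= ny < h and 0 <= nx < w
                            and out[ny, nx] == -1 and labels[ny, nx] == src):
                        out[ny, nx] = next_label
                        queue.append((ny, nx))
            next_label += 1
    return out
```

For example, a label map in which label 0 appears in two regions separated by label 1 comes out with three distinct, connected labels.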
Disentangling Adversarial Robustness and Generalization
Obtaining deep networks that are robust against adversarial examples and
generalize well is an open problem. A recent hypothesis even states that both
robust and accurate models are impossible, i.e., adversarial robustness and
generalization are conflicting goals. In an effort to clarify the relationship
between robustness and generalization, we assume an underlying, low-dimensional
data manifold and show that: 1. regular adversarial examples leave the
manifold; 2. adversarial examples constrained to the manifold, i.e.,
on-manifold adversarial examples, exist; 3. on-manifold adversarial examples
are generalization errors, and on-manifold adversarial training boosts
generalization; 4. regular robustness and generalization are not necessarily
contradicting goals. These findings imply that both robust and accurate
models are possible. However, different models (architectures, training
strategies etc.) can exhibit different robustness and generalization
characteristics. To confirm our claims, we present extensive experiments on
synthetic data (with known manifold) as well as on EMNIST, Fashion-MNIST and
CelebA.
Comment: Conference on Computer Vision and Pattern Recognition 201
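The idea of an on-manifold adversarial example can be illustrated on a toy problem: when the data manifold is known and parameterized by a latent variable, perturbing the latent (rather than the ambient input) keeps the example on the manifold by construction. The decoder, classifier, and attack below are illustrative stand-ins, not the paper's models:

```python
import numpy as np

# Toy manifold: a parabola x = (z, z^2) parameterized by a scalar latent z.
def decode(z):
    return np.array([z, z ** 2])

def classifier_score(x):
    w = np.array([1.0, -1.0])   # hypothetical linear decision boundary
    return float(w @ x)

def on_manifold_attack(z0, steps=50, lr=0.1, eps=0.5):
    """Search for an adversarial latent near z0, staying on the manifold.

    The perturbation acts on z, so decode(z) lies on the manifold by
    construction; eps bounds |z - z0|, the latent-space budget.
    """
    z = z0
    for _ in range(steps):
        # finite-difference gradient of the classifier score w.r.t. z
        h = 1e-4
        g = (classifier_score(decode(z + h))
             - classifier_score(decode(z - h))) / (2 * h)
        z = z - lr * np.sign(g)            # descend the score to flip the sign
        z = np.clip(z, z0 - eps, z0 + eps)
    return z

z0 = 0.2                                   # original point, score > 0
z_adv = on_manifold_attack(z0)
x_adv = decode(z_adv)                      # flipped sign, still on the parabola
```

The key contrast with a regular attack is that `x_adv` remains a valid point of the data distribution's support, so misclassifying it is a generalization error rather than an off-manifold artifact.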
Learning 3D Shape Completion under Weak Supervision
We address the problem of 3D shape completion from sparse and noisy point
clouds, a fundamental problem in computer vision and robotics. Recent
approaches are either data-driven or learning-based: Data-driven approaches
rely on a shape model whose parameters are optimized to fit the observations;
Learning-based approaches, in contrast, avoid the expensive optimization step
by learning to directly predict complete shapes from incomplete observations in
a fully-supervised setting. However, full supervision is often not available in
practice. In this work, we propose a weakly-supervised learning-based approach
to 3D shape completion which neither requires slow optimization nor direct
supervision. While we also learn a shape prior on synthetic data, we amortize,
i.e., learn, maximum likelihood fitting using deep neural networks resulting in
efficient shape completion without sacrificing accuracy. On synthetic
benchmarks based on ShapeNet and ModelNet as well as on real robotics data from
KITTI and Kinect, we demonstrate that the proposed amortized maximum likelihood
approach is able to compete with recent fully supervised baselines and
outperforms data-driven approaches, while requiring less supervision and being
significantly faster.
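The amortization idea, replacing slow per-instance maximum likelihood optimization with a learned predictor, can be sketched on a toy Gaussian "shape prior"; the linear predictor and all names here are illustrative, not the paper's network:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a shape prior: each "shape" is a Gaussian with latent
# mean z, and an observation is a sparse noisy sample from it. Per-instance
# maximum likelihood fitting of z is the sample mean; amortized inference
# instead trains a predictor f(obs) -> z once, so test-time completion is
# a single forward pass.
def ml_fit(obs):
    return obs.mean()                           # per-instance optimization

def train_amortized(n_train=2000, n_obs=5, steps=500, lr=0.05):
    """Fit a linear map obs -> z by gradient descent on the same objective."""
    w = np.zeros(n_obs)
    for _ in range(steps):
        z = rng.normal(size=n_train)                       # latent "shapes"
        obs = z[:, None] + rng.normal(size=(n_train, n_obs))  # noisy views
        pred = obs @ w
        # gradient of mean squared error (the NLL up to scale for unit noise)
        grad = 2 * obs.T @ (pred - z) / n_train
        w -= lr * grad
    return w

w = train_amortized()
```

With a unit-variance prior and unit noise, the learned weights converge near 1/6 per observation (the posterior-mean solution), slightly shrinking the per-instance ML estimate toward the prior; the point of the sketch is that inference cost moves from test time to training time.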
Robustifying Token Attention for Vision Transformers
Despite the success of vision transformers (ViTs), they still suffer from
significant drops in accuracy in the presence of common corruptions, such as
noise or blur. Interestingly, we observe that the attention mechanism of ViTs
tends to rely on few important tokens, a phenomenon we call token overfocusing.
More critically, these tokens are not robust to corruptions, often leading to
highly diverging attention patterns. In this paper, we intend to alleviate this
overfocusing issue and make attention more stable through two general
techniques: First, our Token-aware Average Pooling (TAP) module encourages the
local neighborhood of each token to take part in the attention mechanism.
Specifically, TAP learns average pooling schemes for each token such that the
information of potentially important tokens in the neighborhood can adaptively
be taken into account. Second, we force the output tokens to aggregate
information from a diverse set of input tokens rather than focusing on just a
few by using our Attention Diversification Loss (ADL). We achieve this by
penalizing high cosine similarity between the attention vectors of different
tokens. In experiments, we apply our methods to a wide range of transformer
architectures and improve robustness significantly. For example, we improve
corruption robustness on ImageNet-C by 2.4% while improving accuracy by 0.4%
based on state-of-the-art robust architecture FAN. Also, when fine-tuning on
semantic segmentation tasks, we improve robustness on CityScapes-C by 2.4% and
ACDC by 3.0%. Our code is available at https://github.com/guoyongcs/TAPADL.
Comment: To appear in ICCV 202
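The Attention Diversification Loss can be sketched directly from its description: penalize high cosine similarity between the attention vectors of different output tokens. This is an illustrative single-head version; the released code at the repository above defines the exact form:

```python
import numpy as np

def attention_diversification_loss(attn):
    """Mean pairwise cosine similarity between token attention vectors.

    attn: (tokens, tokens) row-stochastic attention matrix; each row is one
    output token's attention distribution over the input tokens. Minimizing
    this loss pushes rows apart, discouraging many output tokens from
    focusing on the same few inputs ("token overfocusing").
    """
    normed = attn / np.linalg.norm(attn, axis=1, keepdims=True)
    sim = normed @ normed.T                    # pairwise cosine similarities
    n = attn.shape[0]
    off_diag = sim[~np.eye(n, dtype=bool)]     # exclude self-similarity
    return off_diag.mean()
```

An overfocused map where every token attends to the same input gives a loss of 1, while fully diverse one-hot rows give 0.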
The fate of xylem-transported CO2 in plants
The concentration of carbon dioxide in tree stems can be ~30-750 times higher than current atmospheric [CO2]. Dissolved inorganic carbon enters the xylem from root and stem respiration and travels with water through the plant. However, the fate of much of this xylem-transported CO2 is unknown. In these studies I examined the fate of xylem-transported CO2 traveling through the petiole and leaf. This was accomplished by placing cut leaves from a woody and herbaceous C3 species, and a Kranz-type C4 species, in a solution of dissolved NaH13CO3 at concentrations similar to those measured in nature. This allowed me to track the efflux of 13CO2 using tunable diode laser absorption spectroscopy and compare this with 12CO2 fluxes derived from plant metabolism.
The objective of the first study was to measure the efflux of xylem-transported CO2 out of the woody species Populus deltoides and the herbaceous C3 species Brassica napus in the dark by testing the relationship among the concentration of bicarbonate in the xylem, the rate of transpiration, and the rate of gross CO2 efflux. I found that when both the concentration of CO2 in the xylem and the rate of transpiration are high, the magnitude of 13CO2 efflux can approach half of the rate of respiration in the dark.
The second study extends measurements of the fate of xylem-transported CO2 into lighted conditions where photosynthesis is active. I measured 12CO2 and 13CO2 fluxes across light- and CO2-response curves with the objectives of: 1) determining how much and under what conditions xylem-transported CO2 exited cut leaves in the light, and 2) determining how much xylem-transported CO2 was used for photosynthesis and when the overall contribution to photosynthesis was most important. I found that in the light the contribution of xylem-transported CO2 is most important when intercellular [CO2] is low, which occurs under high irradiance and low [CO2].
The last study focused on the efflux and use of xylem-transported CO2 in the Kranz-type C4 species, Amaranthus hypochondriacus. Species with Kranz anatomy have highly active photosynthetic cells surrounding the vascular bundle, which is where xylem-transported CO2 would first interact with photosynthetic cells. The objectives of this study were to determine: 1) the rate and total efflux of xylem-transported CO2 exiting a cut leaf of the Kranz-type C4 species, A. hypochondriacus, in the dark and 2) the rate and contribution of xylem-transported CO2 to total assimilation in the light for A. hypochondriacus. Rates of efflux of xylem-transported CO2 out of A. hypochondriacus leaves in the dark were lower than the rates observed in B. napus across the same rates of transpiration and bicarbonate concentrations. In the light, a higher proportion of xylem-transported CO2 was used for photosynthesis in A. hypochondriacus than in B. napus, suggesting that Kranz anatomy influences how C4 plants use xylem-transported CO2 for photosynthesis.
Bit Error Robustness for Energy-Efficient DNN Accelerators
Deep neural network (DNN) accelerators have received considerable attention in
recent years due to the energy they save compared to mainstream hardware.
Low-voltage operation of DNN accelerators further reduces energy consumption
significantly; however, it causes bit-level failures in the memory storing the
quantized DNN weights. In this paper, we show that a combination of robust
fixed-point quantization, weight clipping, and random bit error training
(RandBET) improves robustness against random bit errors in (quantized) DNN
weights significantly. This leads to high energy savings from both low-voltage
operation as well as low-precision quantization. Our approach generalizes
across operating voltages and accelerators, as demonstrated on bit errors from
profiled SRAM arrays. We also discuss why weight clipping alone is already
quite effective in achieving robustness against bit errors. Moreover, we
specifically discuss the involved trade-offs regarding accuracy, robustness and
precision: Without losing more than 1% in accuracy compared to a normally
trained 8-bit DNN, we can reduce energy consumption on CIFAR-10 by 20%. Higher
energy savings of, e.g., 30%, are possible at the cost of 2.5% accuracy, even
for 4-bit DNNs.
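The random bit errors targeted by RandBET can be modeled as independent flips of each bit of the quantized weights. The helper below is a sketch of such an injection step for 8-bit weights (a hypothetical function, not the paper's released code); during random bit error training, the forward pass would use the perturbed weights:

```python
import numpy as np

rng = np.random.default_rng(0)

def inject_bit_errors(q_weights, p, rng=rng):
    """Flip each bit of the quantized weights independently with probability p.

    q_weights: uint8 array of fixed-point weights (8-bit quantization).
    Models random bit errors in low-voltage SRAM: every bit position of
    every weight is corrupted independently with the same probability.
    """
    # draw one Bernoulli(p) per (weight, bit-position) pair
    bits = rng.random((*q_weights.shape, 8)) < p
    # pack the flip decisions into one XOR mask per weight
    masks = (bits.astype(np.uint8) << np.arange(8)).sum(axis=-1).astype(np.uint8)
    return q_weights ^ masks
```

Setting p = 0 returns the weights unchanged, and p = 1 inverts every bit; intermediate values of p model the bit error rates observed at a given operating voltage.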
What Drives Engagement in Professional Associations? A National Survey of Occupational Therapy Students
Exploring the factors that influence occupational therapy (OT) and occupational therapy assistant (OTA) students to join and participate in professional associations is critical to determine how to extend engagement after graduation. Previous research on health care student participation in professional associations has not included OT or OTA students. The researchers conducted an online quantitative national pilot survey to explore the perceptions of OT/OTA students and to identify supports and challenges for membership. The purposive sampling of currently enrolled students took place over three months in 2017, resulting in 251 responses representing all geographic regions in the United States. The researcher-developed survey evaluated student perceptions of professional membership challenges and supports at both the state and national levels. There was a statistically significant relationship between students participating in an organized student association and reporting membership in their state and national associations. Students sought out professional association memberships even when their academic institutions did not provide support. A majority of students indicated that they planned to be American Occupational Therapy Association members after graduation. Students suggested that more economical membership, conference registration, and academic support could encourage active participation and engagement in their professional associations, extending beyond graduation. This study adds the OT student voice to the existing literature on professional membership and engagement.